Reply to Lee and colleagues—Viral posterior uveitis
The Effects of Humming and Pitch on Craniofacial and Craniocervical Morphology Measured Using MRI
Peer reviewed. Preprint
Relationships Between Vocal Structures, the Airway, and Craniocervical Posture Investigated Using Magnetic Resonance Imaging
Peer reviewed. Preprint
Factorised spatial representation learning: application in semi-supervised myocardial segmentation
The success and generalisation of deep learning algorithms heavily depend on
learning good feature representations. In medical imaging this entails
representing anatomical information, as well as properties related to the
specific imaging setting. Anatomical information is required to perform further
analysis, whereas imaging information is key to disentangle scanner variability
and potential artefacts. The ability to factorise these would allow for
training algorithms only on the relevant information according to the task. To
date, such factorisation has not been attempted. In this paper, we propose a
methodology of latent space factorisation relying on the cycle-consistency
principle. As an example application, we consider cardiac MR segmentation,
where we separate information related to the myocardium from other features
related to imaging and surrounding substructures. We demonstrate the proposed
method's utility in a semi-supervised setting: we use very few labelled images
together with many unlabelled images to train a myocardium segmentation neural
network. Specifically, we achieve comparable performance to fully supervised
networks using a fraction of labelled images in experiments on ACDC and a
dataset from Edinburgh Imaging Facility QMRI. Code will be made available at
https://github.com/agis85/spatial_factorisation.
Comment: Accepted in MICCAI 201
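The factorisation idea above can be sketched in miniature. The snippet below is a toy illustration, not the authors' code: linear maps stand in for the anatomy encoder, the imaging (modality) encoder, and the decoder, and all names (`encode`, `decode`, `cycle_losses`) are assumptions made for this sketch. It shows the structure of a cycle-consistency objective: reconstruct the image from its two factors, then check that re-encoding the reconstruction recovers the same factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's networks (hypothetical, for illustration only):
W_a = rng.normal(size=(16, 16)) * 0.1   # "anatomy" encoder weights
W_m = rng.normal(size=(4, 16)) * 0.1    # "imaging/modality" encoder weights
W_d = rng.normal(size=(16, 20)) * 0.1   # decoder weights (anatomy + modality -> image)

def encode(x):
    """Factorise an image vector into anatomical and imaging components."""
    return W_a @ x, W_m @ x

def decode(a, m):
    """Reconstruct the image from its two latent factors."""
    return W_d @ np.concatenate([a, m])

def cycle_losses(x):
    """Reconstruction loss plus a cycle-consistency term: re-encoding the
    reconstruction should recover the same anatomy and imaging factors."""
    a, m = encode(x)
    x_hat = decode(a, m)
    a2, m2 = encode(x_hat)
    recon = np.mean((x - x_hat) ** 2)
    cycle = np.mean((a - a2) ** 2) + np.mean((m - m2) ** 2)
    return recon, cycle

x = rng.normal(size=16)
recon, cycle = cycle_losses(x)
```

In the semi-supervised setting described above, a supervised segmentation loss on the few labelled images would be added to these unsupervised terms, which is what lets the many unlabelled images contribute to training.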
Can a single image processing algorithm work equally well across all phases of DCE-MRI?
Image segmentation and registration are challenging when applied to dynamic
contrast-enhanced MRI (DCE-MRI) sequences. The contrast agent causes rapid
intensity changes in the region of interest and elsewhere, which can lead to
false positive predictions in segmentation tasks and confound the image
registration similarity metric. While it is widely assumed that contrast
changes increase the difficulty of these tasks, to our knowledge no work has
quantified these effects. In this paper we examine the effect of training with
different ratios of contrast-enhanced (CE) data on two popular tasks:
segmentation (with nnU-Net and Mask R-CNN) and registration (with VoxelMorph
and VTN). We further experimented with strategic use of the available
datasets through pretraining and fine-tuning on different splits of the data.
We found that, to create a generalisable model, pretraining on CE data and
fine-tuning on non-CE data gave the best results. This finding could extend
to other deep-learning-based image processing tasks on DCE-MRI and yield
significant improvements in model performance.
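The best-performing recipe reported above (pretrain on CE phases, fine-tune on non-CE phases) can be sketched as a simple curriculum builder. The helper below is hypothetical, not from the paper's code; it only shows how a DCE-MRI series might be partitioned by contrast phase and ordered into a pretrain/fine-tune schedule.

```python
def make_schedule(volumes, pretrain_on="CE"):
    """Split a DCE-MRI series into contrast-enhanced (CE) and non-CE volumes
    and order them as a pretrain -> fine-tune curriculum. Mirrors the recipe
    found to generalise best: pretrain on CE data, fine-tune on non-CE data.
    (Hypothetical helper for illustration; each volume is a dict with a
    boolean "ce" flag.)"""
    ce = [v for v in volumes if v["ce"]]
    non_ce = [v for v in volumes if not v["ce"]]
    if pretrain_on == "CE":
        return {"pretrain": ce, "finetune": non_ce}
    return {"pretrain": non_ce, "finetune": ce}

# Six phases: 0-1 acquired pre-contrast, 2-5 after contrast injection.
volumes = [{"id": i, "ce": i >= 2} for i in range(6)]
sched = make_schedule(volumes)
print(len(sched["pretrain"]), len(sched["finetune"]))  # → 4 2
```

In practice the two stages would drive separate training runs (e.g. train nnU-Net on `sched["pretrain"]`, then resume with a lower learning rate on `sched["finetune"]`); the split itself is the only part sketched here.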